    Darwinian Dialectics

    Review of: Robert J. Richards and Michael Ruse: Debating Darwin. University of Chicago Press, Chicago. 2016. ISBN: 9780226384429, 320 pages, price: $30.00 (hardcover).

    Affording illusions? Natural Information and the Problem of Misperception

    There are two related points at which J.J. Gibson’s ecological theory of visual perception remains remarkably underspecified: Firstly, the notion of information for perception is not explicated in much detail beyond the claim that it “specifies” the environment for perception, and, thus being an objective affair, enables an organism to perceive action possibilities or “affordances.” Secondly, misperceptions of affordances and perceptual illusions are not clearly distinguished from each other. Although the first claim seems to suggest that any perceptual illusion amounts to the misperception of affordances, there might be some relevant differences between various ways of getting things wrong. In this essay, Gibson’s notion of “specifying” information shall be reconstructed along the lines of Fred Dretske’s relational theory of information. This refined notion of information for perception will then be used to carve out the distinction between perceptual illusions and the misperception of affordances, with some help from the “Empirical Strategy” (developed by Purves et al.). It will be maintained that there are cases where perceptual illusions actually help an organism to correctly perceive an affordance. In such cases, the prima facie misrendered informational relations involved are kept intact by a set of appropriate transformation rules. Two of Gibson’s intuitions shall thus be preserved: the objectivity of informational relations and the empowerment of the organism as an active perceiver who uses those objective relations to his specific ends.

    ‘The Action of the Brain’. Machine Models and Adaptive Functions in Turing and Ashby

    Given the personal acquaintance between Alan M. Turing and W. Ross Ashby and the partial proximity of their research fields, a comparative view of Turing’s and Ashby’s work on modelling “the action of the brain” (letter from Turing to Ashby, 1946) will help to shed light on the seemingly strict symbolic/embodied dichotomy: While it is clear that Turing was committed to formal, computational and Ashby to material, analogue methods of modelling, there is no straightforward mapping of these approaches onto symbol-based AI and embodiment-centered views respectively. Instead, it will be demonstrated that both approaches, starting from a formal core, were at least partly concerned with biological and embodied phenomena, albeit in revealingly distinct ways.

    Invention, Intension and the Extension of the Computational Analogy

    This short philosophical discussion piece explores the relation between two common assumptions: first, that at least some cognitive abilities, such as inventiveness and intuition, are specifically human and, second, that there are principled limitations to what machine-based computation can accomplish in this respect. In contrast to apparent common wisdom, this relation may be one of informal association. The argument rests on the conceptual distinction between intensional and extensional equivalence in the philosophy of computing: Maintaining a principled difference between the processes involved in human cognition, including practices of computation, and machine computation will crucially depend on the requirement of intensional equivalence. However, this requirement was neither part of Turing's expressly extensionally defined analogy between human and machine computation, nor is it pertinent to the domain of computational modelling. Accordingly, the boundaries of the domains of human cognition and machine computation might be independently defined, distinct in extension and variable in relation.

    Invention, Intension and the Limits of Computation

    This is a critical exploration of the relation between two common assumptions in anti-computationalist critiques of Artificial Intelligence: The first assumption is that at least some cognitive abilities are specifically human and non-computational in nature, whereas the second assumption is that there are principled limitations to what machine-based computation can accomplish with respect to simulating or replicating these abilities. Against the view that these putative differences between computation in humans and machines are closely related, this essay argues that the boundaries of the domains of human cognition and machine computation might be independently defined, distinct in extension and variable in relation. The argument rests on the conceptual distinction between intensional and extensional equivalence in the philosophy of computing and on an inquiry into the scope and nature of human invention in mathematics, and their respective bearing on theories of computation.
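
    As a minimal sketch (not drawn from the paper itself), the extensional/intensional distinction invoked in this abstract can be illustrated in code: two procedures may be extensionally equivalent, realising exactly the same input-output mapping, while remaining intensionally distinct, since they arrive at their results by different processes. The function names below are purely illustrative.

```python
# Illustrative sketch, not from the paper: extensional vs. intensional equivalence.
# Both functions realise the same input-output mapping n -> n! (extensional
# equivalence), but they do so by different procedures (intensional difference).

def factorial_recursive(n: int) -> int:
    """Compute n! by recursing on the definition n! = n * (n - 1)!."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)


def factorial_iterative(n: int) -> int:
    """Compute n! by iterating a running product."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result


# Extensionally, the two procedures are indistinguishable on their shared domain:
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
```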

    Exploring Minds. Modes of Modelling and Simulation in Artificial Intelligence

    The aim of this paper is to grasp the relevant distinctions between various ways in which models and simulations in Artificial Intelligence (AI) relate to cognitive phenomena. In order to get a systematic picture, a taxonomy is developed that is based on the coordinates of formal versus material analogies and theory-guided versus pre-theoretic models in science. These distinctions have parallels in the computational versus mimetic aspects and in analytic versus exploratory types of computer simulation. This taxonomy cuts across the traditional dichotomies between symbolic / embodied AI, general intelligence / cognitive simulation and human / non-human-like AI. According to the taxonomy proposed here, one can distinguish between four distinct general approaches that figured prominently in early and classical AI, and that have partly developed into distinct research programmes: first, phenomenal simulations (e.g., Turing’s “imitation game”); second, simulations that explore general-level formal isomorphisms in pursuit of a general theory of intelligence (e.g., logic-based AI); third, simulations as exploratory material models that serve to develop theoretical accounts of cognitive processes (e.g., Marr’s stages of visual processing and classical connectionism); and fourth, simulations as strictly formal models of a theory of computation that postulates cognitive processes to be isomorphic with computational processes (strong symbolic AI). In continuation of pragmaticist views of the modes of modelling and simulating world affairs (Humphreys, Winsberg), this taxonomy of approaches to modelling in AI helps to elucidate how available computational concepts and simulational resources contribute to the modes of representation and theory development in AI research – and what made that research programme uniquely dependent on them.

    Environments of Intelligence

    What is the role of the environment, and of the information it provides, in cognition? More specifically, may there be a role for certain artefacts to play in this context? These are questions that motivate "4E" theories of cognition (as being embodied, embedded, extended, enactive). In his take on that family of views, Hajo Greif first defends and refines a concept of information as primarily natural, environmentally embedded in character, which had been eclipsed by information-processing views of cognition. He continues with an inquiry into the cognitive bearing of some artefacts that are sometimes referred to as 'intelligent environments'. Without necessarily having much to do with Artificial Intelligence, such artefacts may ultimately modify our informational environments. With respect to human cognition, the most notable effect of digital computers is not that they might be able, or become able, to think but that they alter the way we perceive, think and act. The Open Access version of this book, available at http://www.tandfebooks.com/doi/view/10.4324/9781315401867, has been made available under a Creative Commons CC-BY licence.